Conceptual


Q1

The null hypothesis for each parameter is that the corresponding coefficient equals 0.

Since the p-values for the Intercept, TV and radio are all smaller than 0.0001, we can reject the null hypothesis for each of these parameters and conclude that none of them is 0. However, the p-value for newspaper is 0.8599, so we cannot reject its null hypothesis: there is no evidence that newspaper spending is associated with sales once TV and radio are in the model.

Q2

The main difference is the output: a KNN classifier assigns a class label by majority vote among the \(K\) closest observations, whereas a KNN regression model outputs a continuous value, the average response of the \(K\) closest observations.
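As a minimal illustration, here is a toy base-R sketch of both procedures (knn_class and knn_reg are illustrative helper names, not library functions):

knn_neighbors <- function(x_train, x0, k) order(abs(x_train - x0))[1:k]
knn_class <- function(x_train, g_train, x0, k) {
  nb <- knn_neighbors(x_train, x0, k)
  names(which.max(table(g_train[nb])))  # majority vote among the k neighbors
}
knn_reg <- function(x_train, y_train, x0, k) {
  nb <- knn_neighbors(x_train, x0, k)
  mean(y_train[nb])                     # average response of the k neighbors
}
set.seed(1)
x=rnorm(20); y=x+rnorm(20); g=ifelse(x > 0, "A", "B")
knn_class(x, g, x0=0.3, k=3)  # returns a class label
knn_reg(x, y, x0=0.3, k=3)    # returns a continuous value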

Q3

We can write down the fitted linear model: \(StartingSalary=50+20*GPA+0.07*IQ+35*Gender+0.01*GPA*IQ-10*GPA*Gender\) (salary in thousands of dollars).

(a)

iii is correct. The difference in salary between females and males (female minus male) is \(35-10*GPA\). With GPA and IQ fixed, the sign of this difference depends on GPA: once GPA exceeds 3.5 the difference becomes negative, so for a high enough GPA males earn more on average.
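A quick check of the gap at a few GPA values (a small illustrative computation):

gpa=c(2.5, 3.0, 3.5, 4.0)
data.frame(gpa, female_minus_male=35-10*gpa)  # positive below GPA 3.5, zero at 3.5, negative above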

(b)

\(StartingSalary=50+20*4+0.07*110+35*1+0.01*4*110-10*4*1=137.1\)
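As a sanity check, the same arithmetic in R:

50+20*4+0.07*110+35*1+0.01*4*110-10*4*1  # 137.1 (in thousands of dollars)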

(c)

FALSE. The magnitude of a coefficient alone says nothing about the strength of evidence for an effect; that depends on its standard error (t-statistic/p-value). Moreover, IQ is measured on a much wider scale than GPA (0~4), so with GPA fixed the interaction term \(0.01*GPA*IQ\) can still have a sizeable effect on the predicted salary even though its coefficient is small.

Q4

(a)

The training RSS of the linear model is expected to be larger than that of the cubic model, because a more flexible model always fits the training data at least as well.

(b)

The test RSS of the cubic model will be larger: since the true relationship is linear, the extra flexibility only fits noise (overfitting).

(c)

The training RSS of the cubic model will be smaller: adding parameters can only decrease the RSS on the training data.

(d)

We would expect the test RSS of the cubic model to be smaller, since the true relationship is not linear and the cubic model can approximate it better. Strictly speaking there is not enough information to tell: if the truth is only slightly non-linear, the linear model could still achieve the lower test RSS.
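A quick simulation (an illustrative sketch with an arbitrary seed, not part of the original answer) makes the training/test contrast in (a)-(b) concrete:

set.seed(1)
n=100
x=rnorm(n); y=1+2*x+rnorm(n)  # truly linear relationship
train=sample(n, n/2)
fit1=lm(y~x, subset=train)
fit3=lm(y~poly(x,3), subset=train)
# training RSS: the cubic fit is never worse
sum(resid(fit1)^2); sum(resid(fit3)^2)
# test RSS: the linear fit tends to win when the truth is linear
sum((y[-train]-predict(fit1, data.frame(x=x[-train])))^2)
sum((y[-train]-predict(fit3, data.frame(x=x[-train])))^2)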

Q5

\[ \begin{aligned} \hat{y}_k&=x_k\hat\beta \\ &=x_k(\sum_{i=1}^n x_iy_i)/(\sum_{i^{\prime}=1}^nx_{i^\prime}^2) \\ &=\sum_{i=1}^n\frac{x_ix_k}{\sum_{i^{\prime}=1}^nx_{i^\prime}^2}y_i \\ \end{aligned} \]

Therefore

\(c_i=\frac{x_ix_k}{\sum_{i^{\prime}=1}^nx_{i^\prime}^2}\)

Q6

Using (3.4), \(\hat{\beta}_0=\overline{y}-\hat{\beta}_1\overline{x}\), the fitted value at \(x=\overline{x}\) is \[ \hat{\beta}_1\overline{x}+\hat{\beta}_0=\hat{\beta}_1\overline{x}+\overline{y}-\hat{\beta}_1\overline{x}=\overline{y} \] This proves that the least squares line passes through \((\overline{x},\overline{y})\).
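A numerical check with the Auto data (using mean horsepower as \(\overline{x}\)):

fit=lm(mpg~horsepower, data=ISLR::Auto)
predict(fit, data.frame(horsepower=mean(ISLR::Auto$horsepower)))
mean(ISLR::Auto$mpg)  # the two values agree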

Q7

For simplicity, here we assume that \(\overline{x}=\overline{y}=0\); under this assumption, the least squares estimates are: \[ \hat{\beta}_1=\frac{\sum x_iy_i}{\sum x_i^2}\\ \hat{\beta}_0=0 \] Then we compute \(R^2\): \[ \begin{aligned} R^2&=\frac{\sum_{i=1}^n \hat{y}_i^2}{\sum_{i=1}^n y_i^2}\\ &=\frac{\sum (\hat{\beta}_1x_i)^2}{\sum_{i=1}^n y_i^2}\\ &=\frac{\hat{\beta}_1^2\sum_{i=1}^n x_i^2}{\sum_{i=1}^n y_i^2}\\ &=\frac{(\sum_{i=1}^n x_iy_i)^2}{(\sum_{i=1}^n x_i^2)^2}\frac{\sum_{i=1}^n x_i^2}{\sum_{i=1}^n y_i^2}\\ &=\frac{(\sum_{i=1}^n x_iy_i)^2}{\sum_{i=1}^n x_i^2\sum_{i=1}^n y_i^2}=Cor^2(x,y) \end{aligned} \]
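A quick sanity check of this identity on simulated, mean-centered data (illustrative only):

set.seed(2)
xc=rnorm(50); yc=1+2*xc+rnorm(50)
xc=xc-mean(xc); yc=yc-mean(yc)  # enforce the zero-mean assumption
c(R2=summary(lm(yc~xc))$r.squared, cor2=cor(xc,yc)^2)  # the two values are equal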

Applied

Q8

(a)

library(ISLR)
lm.fit=lm(mpg~horsepower,Auto)
summary(lm.fit)
## 
## Call:
## lm(formula = mpg ~ horsepower, data = Auto)
## 
## Residuals:
##      Min       1Q   Median       3Q      Max 
## -13.5710  -3.2592  -0.3435   2.7630  16.9240 
## 
## Coefficients:
##              Estimate Std. Error t value Pr(>|t|)    
## (Intercept) 39.935861   0.717499   55.66   <2e-16 ***
## horsepower  -0.157845   0.006446  -24.49   <2e-16 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 4.906 on 390 degrees of freedom
## Multiple R-squared:  0.6059, Adjusted R-squared:  0.6049 
## F-statistic: 599.7 on 1 and 390 DF,  p-value: < 2.2e-16
predict(lm.fit,data.frame(horsepower=98),interval='prediction')
##        fit     lwr      upr
## 1 24.46708 14.8094 34.12476
confint(lm.fit)
##                 2.5 %     97.5 %
## (Intercept) 38.525212 41.3465103
## horsepower  -0.170517 -0.1451725

horsepower and mpg have a significant negative linear relationship.

When horsepower increases by 100, mpg is expected to decrease by about 15.8.

When horsepower is 98, the predicted mpg is 24.47, with a 95% prediction interval of (14.81, 34.12). The associated 95% confidence interval for the horsepower coefficient is (-0.1705, -0.1452).
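For the average mpg at horsepower = 98 (as opposed to the prediction for a single car), the confidence interval comes from the same call with interval='confidence'; it is narrower than the prediction interval (output omitted):

predict(lm.fit,data.frame(horsepower=98),interval='confidence')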

(b)

attach(Auto)
plot(horsepower,mpg)
abline(lm.fit)

(c)

par(mfrow=c(2,2))
plot(lm.fit)

The residuals vs fitted panel (1,1) shows a U shape, indicating non-linearity in the data; the residual plots also show that the variance of the error terms is not constant. The residuals vs leverage panel (2,2) shows that high leverage points exist.
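The U shape suggests trying a transformation of the predictor; for example, a log model (a quick sketch, not required by the exercise):

lm.fit.log=lm(mpg~log(horsepower), data=Auto)
summary(lm.fit.log)$r.squared  # compare with 0.6059 for the linear fit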

Q9

(a)

pairs(Auto)

(b)

cor(Auto[,-9])
##                     mpg  cylinders displacement horsepower     weight
## mpg           1.0000000 -0.7776175   -0.8051269 -0.7784268 -0.8322442
## cylinders    -0.7776175  1.0000000    0.9508233  0.8429834  0.8975273
## displacement -0.8051269  0.9508233    1.0000000  0.8972570  0.9329944
## horsepower   -0.7784268  0.8429834    0.8972570  1.0000000  0.8645377
## weight       -0.8322442  0.8975273    0.9329944  0.8645377  1.0000000
## acceleration  0.4233285 -0.5046834   -0.5438005 -0.6891955 -0.4168392
## year          0.5805410 -0.3456474   -0.3698552 -0.4163615 -0.3091199
## origin        0.5652088 -0.5689316   -0.6145351 -0.4551715 -0.5850054
##              acceleration       year     origin
## mpg             0.4233285  0.5805410  0.5652088
## cylinders      -0.5046834 -0.3456474 -0.5689316
## displacement   -0.5438005 -0.3698552 -0.6145351
## horsepower     -0.6891955 -0.4163615 -0.4551715
## weight         -0.4168392 -0.3091199 -0.5850054
## acceleration    1.0000000  0.2903161  0.2127458
## year            0.2903161  1.0000000  0.1815277
## origin          0.2127458  0.1815277  1.0000000

(c)

lm.fit2=lm(mpg~.-name,Auto)
summary(lm.fit2)
## 
## Call:
## lm(formula = mpg ~ . - name, data = Auto)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -9.5903 -2.1565 -0.1169  1.8690 13.0604 
## 
## Coefficients:
##                Estimate Std. Error t value Pr(>|t|)    
## (Intercept)  -17.218435   4.644294  -3.707  0.00024 ***
## cylinders     -0.493376   0.323282  -1.526  0.12780    
## displacement   0.019896   0.007515   2.647  0.00844 ** 
## horsepower    -0.016951   0.013787  -1.230  0.21963    
## weight        -0.006474   0.000652  -9.929  < 2e-16 ***
## acceleration   0.080576   0.098845   0.815  0.41548    
## year           0.750773   0.050973  14.729  < 2e-16 ***
## origin         1.426141   0.278136   5.127 4.67e-07 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 3.328 on 384 degrees of freedom
## Multiple R-squared:  0.8215, Adjusted R-squared:  0.8182 
## F-statistic: 252.4 on 7 and 384 DF,  p-value: < 2.2e-16

Yes. Displacement, weight, year and origin are statistically significant for the response. Holding the other predictors fixed, mpg increases by about 0.75 for each additional model year.

(d)

par(mfrow=c(2,2))
plot(lm.fit2)

The residuals vs fitted plot suggests that the variance of the error terms increases with the fitted values and that some non-linearity remains in the data.

The \(\sqrt{|standardized\ residuals|}\) vs fitted plot suggests no extreme outliers.

The residuals vs leverage plot suggests that observation 14 is a high leverage point.
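This can be confirmed numerically (output omitted):

which.max(hatvalues(lm.fit2))  # index of the highest-leverage observation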

(e)

#try one model
fit.lm0 <- lm(mpg~displacement*weight+year:origin, data=Auto)
summary(fit.lm0)
## 
## Call:
## lm(formula = mpg ~ displacement * weight + year:origin, data = Auto)
## 
## Residuals:
##      Min       1Q   Median       3Q      Max 
## -14.2529  -2.4553  -0.3576   1.9435  17.8081 
## 
## Coefficients:
##                       Estimate Std. Error t value Pr(>|t|)    
## (Intercept)          4.985e+01  2.425e+00  20.560  < 2e-16 ***
## displacement        -6.494e-02  1.233e-02  -5.269 2.29e-07 ***
## weight              -8.380e-03  8.667e-04  -9.668  < 2e-16 ***
## displacement:weight  1.477e-05  2.949e-06   5.009 8.35e-07 ***
## year:origin          1.166e-02  4.433e-03   2.629   0.0089 ** 
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 4.066 on 387 degrees of freedom
## Multiple R-squared:  0.7313, Adjusted R-squared:  0.7285 
## F-statistic: 263.4 on 4 and 387 DF,  p-value: < 2.2e-16

The two interaction terms appear to be statistically significant.

(f)

lm.fit4=lm(mpg~.-name+I(weight^2),Auto)
summary(lm.fit4)
## 
## Call:
## lm(formula = mpg ~ . - name + I(weight^2), data = Auto)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -9.4706 -1.6701 -0.1488  1.6383 12.5429 
## 
## Coefficients:
##                Estimate Std. Error t value Pr(>|t|)    
## (Intercept)   1.479e+00  4.614e+00   0.321  0.74867    
## cylinders    -2.840e-01  2.917e-01  -0.974  0.33083    
## displacement  1.371e-02  6.793e-03   2.019  0.04418 *  
## horsepower   -2.435e-02  1.243e-02  -1.959  0.05083 .  
## weight       -2.049e-02  1.580e-03 -12.970  < 2e-16 ***
## acceleration  6.571e-02  8.895e-02   0.739  0.46055    
## year          7.999e-01  4.615e-02  17.331  < 2e-16 ***
## origin        7.418e-01  2.603e-01   2.850  0.00461 ** 
## I(weight^2)   2.237e-06  2.341e-07   9.556  < 2e-16 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 2.994 on 383 degrees of freedom
## Multiple R-squared:  0.8558, Adjusted R-squared:  0.8528 
## F-statistic: 284.2 on 8 and 383 DF,  p-value: < 2.2e-16

The squared weight term is also statistically significant.
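A formal comparison of the models with and without the squared term can be made with a nested-model F-test (output omitted):

anova(lm.fit2, lm.fit4)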

Q10

(a)

lm.fit=lm(Sales~Price+Urban+US,Carseats)

(b)

summary(lm.fit)
## 
## Call:
## lm(formula = Sales ~ Price + Urban + US, data = Carseats)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -6.9206 -1.6220 -0.0564  1.5786  7.0581 
## 
## Coefficients:
##              Estimate Std. Error t value Pr(>|t|)    
## (Intercept) 13.043469   0.651012  20.036  < 2e-16 ***
## Price       -0.054459   0.005242 -10.389  < 2e-16 ***
## UrbanYes    -0.021916   0.271650  -0.081    0.936    
## USYes        1.200573   0.259042   4.635 4.86e-06 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 2.472 on 396 degrees of freedom
## Multiple R-squared:  0.2393, Adjusted R-squared:  0.2335 
## F-statistic: 41.52 on 3 and 396 DF,  p-value: < 2.2e-16

Sales: sales in thousands at each location

Price: price charged for car seats at each location

Urban: No/Yes by location

US: No/Yes by location

Interpretation of the coefficients (Sales is measured in thousands of units):

  • Price (-0.054459): sales fall by about 54 units for each dollar increase in price - statistically significant
  • UrbanYes (-0.021916): sales are about 22 units lower in urban locations - not statistically significant
  • USYes (1.200573): sales are about 1,201 units higher in US locations - statistically significant

(c)

First, check how R codes the dummy variables.

contrasts(Carseats$Urban)
##     Yes
## No    0
## Yes   1
contrasts(Carseats$US)
##     Yes
## No    0
## Yes   1

\[ Sales_i=\beta_0+\beta_1Price_i+ \begin{cases} \beta_2+\beta_3 & Urban=Yes, US=Yes \\ \beta_2 & Urban=Yes, US=No\\ \beta_3 & Urban=No, US=Yes \\ 0 & Urban=No, US=No \end{cases} \]

(d)

We can reject the null hypothesis for Price and US.

(e)

lm.fit2=lm(Sales~Price+US,Carseats)

(f)

anova(lm.fit,lm.fit2)  # compare the full model with the reduced model
## Analysis of Variance Table
## 
## Model 1: Sales ~ Price + Urban + US
## Model 2: Sales ~ Price + US
##   Res.Df    RSS Df Sum of Sq      F Pr(>F)
## 1    396 2420.8                           
## 2    397 2420.9 -1  -0.03979 0.0065 0.9357

The ANOVA F-test (p = 0.94) shows no significant difference between the two models, so the simplified model fits the data just as well.

(g)

confint(lm.fit2)
##                   2.5 %      97.5 %
## (Intercept) 11.79032020 14.27126531
## Price       -0.06475984 -0.04419543
## USYes        0.69151957  1.70776632

(h)

par(mfrow=c(2,2))
# residuals v fitted plot doesn't show strong outliers
plot(lm.fit2)  

No evidence of outliers.

The (2,2) panel shows that high leverage points exist (the average leverage here is \((p+1)/n=3/400=0.0075\), a useful reference threshold).
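A quick numerical check against that threshold (output omitted):

lev=hatvalues(lm.fit2)
head(sort(lev, decreasing=TRUE), 3)  # compare with the average leverage 3/400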

library(car)
## Loading required package: carData
leveragePlots(lm.fit2)

Q11

(a)

set.seed(1)
x=rnorm(100)
y=2*x+rnorm(100)

lm.fit=lm(y~x+0)
summary(lm.fit)
## 
## Call:
## lm(formula = y ~ x + 0)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -1.9154 -0.6472 -0.1771  0.5056  2.3109 
## 
## Coefficients:
##   Estimate Std. Error t value Pr(>|t|)    
## x   1.9939     0.1065   18.73   <2e-16 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 0.9586 on 99 degrees of freedom
## Multiple R-squared:  0.7798, Adjusted R-squared:  0.7776 
## F-statistic: 350.7 on 1 and 99 DF,  p-value: < 2.2e-16

The p-value for this coefficient is very small, so we can reject the null hypothesis and conclude that there is a relationship between x and y.

(b)

lm.fit2=lm(x~y+0)
summary(lm.fit2)
## 
## Call:
## lm(formula = x ~ y + 0)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -0.8699 -0.2368  0.1030  0.2858  0.8938 
## 
## Coefficients:
##   Estimate Std. Error t value Pr(>|t|)    
## y  0.39111    0.02089   18.73   <2e-16 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 0.4246 on 99 degrees of freedom
## Multiple R-squared:  0.7798, Adjusted R-squared:  0.7776 
## F-statistic: 350.7 on 1 and 99 DF,  p-value: < 2.2e-16

The p-value for this coefficient is also very small, so we can reject the null hypothesis and again conclude that there is a relationship between x and y.

(c)

Their \({\rm R}^2\), adjusted \({\rm R}^2\), t-value and F-statistic are all equal.

(d)

From Q5, we know \[ \hat{\beta}=(\sum_{i=1}^nx_iy_i)/(\sum_{i^\prime=1}^nx_{i^\prime}^2) \] Then we can get: \[ \begin{aligned} t&=\frac{\hat{\beta}}{SE(\hat{\beta})}\\ &=\frac{\sum x_iy_i}{\sum x_{i}^2} \sqrt{\frac{(n-1)\sum x_i^2}{\sum (y_i-x_i\hat\beta)^2}}\\ &=\frac{\sum x_iy_i\sqrt{n-1}}{\sqrt{\sum x_i^2}\sqrt{\sum(y_i-x_i\hat\beta)^2}}\\ &=\frac{\sum x_iy_i\sqrt{n-1}}{\sqrt{\sum x_i^2}\sqrt{\sum y_i^2-\frac{(\sum x_iy_i)^2}{\sum x_i^2}}}\\ &=\frac{\sum x_iy_i\sqrt{n-1}}{\sqrt{(\sum x_i^2)(\sum y_i^2)-(\sum x_iy_i)^2}} \end{aligned} \]

(e)

From (d), the expression for the t-statistic is symmetric in x and y, so regressing y on x and regressing x on y give the same t-statistic.
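Evaluating the closed-form expression from (d) on the simulated data reproduces the t value 18.73 reported in (a) and (b):

n=length(x)
sqrt(n-1)*sum(x*y)/sqrt(sum(x^2)*sum(y^2)-sum(x*y)^2)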

(f)

lm.fit=lm(y~x)
summary(lm.fit)
## 
## Call:
## lm(formula = y ~ x)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -1.8768 -0.6138 -0.1395  0.5394  2.3462 
## 
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)    
## (Intercept) -0.03769    0.09699  -0.389    0.698    
## x            1.99894    0.10773  18.556   <2e-16 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 0.9628 on 98 degrees of freedom
## Multiple R-squared:  0.7784, Adjusted R-squared:  0.7762 
## F-statistic: 344.3 on 1 and 98 DF,  p-value: < 2.2e-16
lm.fit2=lm(x~y)
summary(lm.fit2)
## 
## Call:
## lm(formula = x ~ y)
## 
## Residuals:
##      Min       1Q   Median       3Q      Max 
## -0.90848 -0.28101  0.06274  0.24570  0.85736 
## 
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)    
## (Intercept)  0.03880    0.04266    0.91    0.365    
## y            0.38942    0.02099   18.56   <2e-16 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 0.4249 on 98 degrees of freedom
## Multiple R-squared:  0.7784, Adjusted R-squared:  0.7762 
## F-statistic: 344.3 on 1 and 98 DF,  p-value: < 2.2e-16

Q12

(a)

The coefficient estimates are the same when \(\sum x_i^2=\sum y_i^2\).

(b)

x=rnorm(100)
y=rnorm(100,1)
summary(lm(y~x+0))
## 
## Call:
## lm(formula = y ~ x + 0)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -1.7213  0.4923  1.0455  1.7382  2.9640 
## 
## Coefficients:
##   Estimate Std. Error t value Pr(>|t|)
## x   0.1356     0.1401   0.968    0.336
## 
## Residual standard error: 1.442 on 99 degrees of freedom
## Multiple R-squared:  0.009368,   Adjusted R-squared:  -0.0006383 
## F-statistic: 0.9362 on 1 and 99 DF,  p-value: 0.3356
summary(lm(x~y+0))
## 
## Call:
## lm(formula = x ~ y + 0)
## 
## Residuals:
##      Min       1Q   Median       3Q      Max 
## -2.84960 -0.56261 -0.03628  0.58839  2.58482 
## 
## Coefficients:
##   Estimate Std. Error t value Pr(>|t|)
## y  0.06910    0.07142   0.968    0.336
## 
## Residual standard error: 1.03 on 99 degrees of freedom
## Multiple R-squared:  0.009368,   Adjusted R-squared:  -0.0006383 
## F-statistic: 0.9362 on 1 and 99 DF,  p-value: 0.3356

The coefficient estimates are different.

(c)

x=rnorm(100)
y = sample(x, 100, replace = FALSE)  # a permutation of x, so sum(y^2) equals sum(x^2)
summary(lm(y~x+0))
## 
## Call:
## lm(formula = y ~ x + 0)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -3.0208 -0.8016 -0.1336  0.6094  3.8259 
## 
## Coefficients:
##   Estimate Std. Error t value Pr(>|t|)
## x -0.05076    0.10037  -0.506    0.614
## 
## Residual standard error: 1.169 on 99 degrees of freedom
## Multiple R-squared:  0.002577,   Adjusted R-squared:  -0.007498 
## F-statistic: 0.2557 on 1 and 99 DF,  p-value: 0.6142
summary(lm(x~y+0))
## 
## Call:
## lm(formula = x ~ y + 0)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -3.0095 -0.7707 -0.1270  0.6319  3.8207 
## 
## Coefficients:
##   Estimate Std. Error t value Pr(>|t|)
## y -0.05076    0.10037  -0.506    0.614
## 
## Residual standard error: 1.169 on 99 degrees of freedom
## Multiple R-squared:  0.002577,   Adjusted R-squared:  -0.007498 
## F-statistic: 0.2557 on 1 and 99 DF,  p-value: 0.6142

The coefficient estimates are the same.

Q13

(a)

set.seed(1)
x=rnorm(100)

(b)

eps=rnorm(100,0,0.25)

(c)

y=-1+0.5*x+eps

The length of y is 100; \(\beta_0\) is -1 and \(\beta_1\) is 0.5.

(d)

plot(x,y)

There is a positive linear relationship between x and y.

(e)

lm.fit=lm(y~x)
summary(lm.fit)
## 
## Call:
## lm(formula = y ~ x)
## 
## Residuals:
##      Min       1Q   Median       3Q      Max 
## -0.46921 -0.15344 -0.03487  0.13485  0.58654 
## 
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)    
## (Intercept) -1.00942    0.02425  -41.63   <2e-16 ***
## x            0.49973    0.02693   18.56   <2e-16 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 0.2407 on 98 degrees of freedom
## Multiple R-squared:  0.7784, Adjusted R-squared:  0.7762 
## F-statistic: 344.3 on 1 and 98 DF,  p-value: < 2.2e-16

\(\hat\beta_0\) and \(\hat\beta_1\) are very close to \(\beta_0\) and \(\beta_1\).

(f)

plot(x,y)
abline(-1, 0.5, col="blue")  # true regression
abline(lm.fit, col="red")    # fitted regression
legend(x = c(1,1),
       y = c(-2,-2),
       legend = c("population", "model fit"),
       col = c("blue","red"), lwd=3 )

(g)

lm.fit2=lm(y~poly(x,2))
summary(lm.fit2)
## 
## Call:
## lm(formula = y ~ poly(x, 2))
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -0.4913 -0.1563 -0.0322  0.1451  0.5675 
## 
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)    
## (Intercept) -0.95501    0.02395 -39.874   <2e-16 ***
## poly(x, 2)1  4.46612    0.23951  18.647   <2e-16 ***
## poly(x, 2)2 -0.33602    0.23951  -1.403    0.164    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 0.2395 on 97 degrees of freedom
## Multiple R-squared:  0.7828, Adjusted R-squared:  0.7784 
## F-statistic: 174.8 on 2 and 97 DF,  p-value: < 2.2e-16

The p-value for the quadratic term is 0.164, so there is no evidence that the quadratic term improves the model fit; the small increase in \({\rm R}^2\) (0.7828 vs 0.7784) reflects the extra flexibility rather than a real effect.
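The same conclusion follows from a nested-model F-test (output omitted):

anova(lm.fit, lm.fit2)  # tests whether the quadratic term helps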

(h)

We decrease the standard deviation of the error term to 0.1.

set.seed(1)
x=rnorm(100)
eps=rnorm(100,0,0.1)
y=-1+0.5*x+eps
lm.fit2=lm(y~x)
summary(lm.fit2)
## 
## Call:
## lm(formula = y ~ x)
## 
## Residuals:
##      Min       1Q   Median       3Q      Max 
## -0.18768 -0.06138 -0.01395  0.05394  0.23462 
## 
## Coefficients:
##              Estimate Std. Error t value Pr(>|t|)    
## (Intercept) -1.003769   0.009699  -103.5   <2e-16 ***
## x            0.499894   0.010773    46.4   <2e-16 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 0.09628 on 98 degrees of freedom
## Multiple R-squared:  0.9565, Adjusted R-squared:  0.956 
## F-statistic:  2153 on 1 and 98 DF,  p-value: < 2.2e-16
plot(x,y)
abline(-1, 0.5, col="blue")  # true regression
abline(lm.fit2, col="red")    # fitted regression
legend(x = c(1,1),
       y = c(-2,-2),
       legend = c("population", "model fit"),
       col = c("blue","red"), lwd=3 )

The fit is closer to the true regression line than before: with less noise, \({\rm R}^2\) increases to 0.96.

(i)

We increase the standard deviation of the error term to 0.4.

set.seed(1)
x=rnorm(100)
eps=rnorm(100,0,0.4)
y=-1+0.5*x+eps
lm.fit3=lm(y~x)
summary(lm.fit3)
## 
## Call:
## lm(formula = y ~ x)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -0.7507 -0.2455 -0.0558  0.2158  0.9385 
## 
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)    
## (Intercept) -1.01508    0.03879  -26.16   <2e-16 ***
## x            0.49958    0.04309   11.59   <2e-16 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 0.3851 on 98 degrees of freedom
## Multiple R-squared:  0.5783, Adjusted R-squared:  0.574 
## F-statistic: 134.4 on 1 and 98 DF,  p-value: < 2.2e-16
plot(x,y)
abline(-1, 0.5, col="blue")  # true regression
abline(lm.fit3, col="red")    # fitted regression
legend(x = c(1,1),
       y = c(-2,-2),
       legend = c("population", "model fit"),
       col = c("blue","red"), lwd=3 )

The fit is noisier than the original one: \({\rm R}^2\) drops to 0.58, although the coefficient estimates remain close to the truth.

(j)

confint(lm.fit)
##                  2.5 %     97.5 %
## (Intercept) -1.0575402 -0.9613061
## x            0.4462897  0.5531801
confint(lm.fit2)
##                  2.5 %     97.5 %
## (Intercept) -1.0230161 -0.9845224
## x            0.4785159  0.5212720
confint(lm.fit3)
##                  2.5 %     97.5 %
## (Intercept) -1.0920643 -0.9380898
## x            0.4140635  0.5850882
rm(list = setdiff(ls(), lsf.str()))

The confidence intervals of the three models differ: their widths grow with the noise level (less noisy < original < noisier).

Q14

(a)

set.seed(1)
x1=runif(100)
x2=0.5*x1+rnorm(100)/10
y=2+2*x1+0.3*x2+rnorm(100)

\[y=\beta_0+\beta_1x_1+\beta_2x_2+\epsilon\]

The regression coefficients are \(\beta_0=2\), \(\beta_1=2\) and \(\beta_2=0.3\).

(b)

plot(x1,x2)
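The two predictors are strongly collinear by construction (the theoretical correlation is roughly 0.8):

cor(x1, x2)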

(c)

lm.fit=lm(y~x1+x2)
summary(lm.fit)
## 
## Call:
## lm(formula = y ~ x1 + x2)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -2.8311 -0.7273 -0.0537  0.6338  2.3359 
## 
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)    
## (Intercept)   2.1305     0.2319   9.188 7.61e-15 ***
## x1            1.4396     0.7212   1.996   0.0487 *  
## x2            1.0097     1.1337   0.891   0.3754    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 1.056 on 97 degrees of freedom
## Multiple R-squared:  0.2088, Adjusted R-squared:  0.1925 
## F-statistic:  12.8 on 2 and 97 DF,  p-value: 1.164e-05

\(\hat\beta_0\) is 2.1305. \(\hat\beta_1\) is 1.4396. \(\hat\beta_2\) is 1.0097.

At the 5% level we can reject the null hypothesis \(H_0:\beta_1=0\) (p = 0.049), but we cannot reject \(H_0:\beta_2=0\) (p = 0.375).

(d)

lm.fit=lm(y~x1)
summary(lm.fit)
## 
## Call:
## lm(formula = y ~ x1)
## 
## Residuals:
##      Min       1Q   Median       3Q      Max 
## -2.89495 -0.66874 -0.07785  0.59221  2.45560 
## 
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)    
## (Intercept)   2.1124     0.2307   9.155 8.27e-15 ***
## x1            1.9759     0.3963   4.986 2.66e-06 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 1.055 on 98 degrees of freedom
## Multiple R-squared:  0.2024, Adjusted R-squared:  0.1942 
## F-statistic: 24.86 on 1 and 98 DF,  p-value: 2.661e-06

Yes: the p-value for \(x_1\) (2.66e-06) is very small, so we can reject \(H_0:\beta_1=0\) in this model.

(e)

lm.fit=lm(y~x2)
summary(lm.fit)
## 
## Call:
## lm(formula = y ~ x2)
## 
## Residuals:
##      Min       1Q   Median       3Q      Max 
## -2.62687 -0.75156 -0.03598  0.72383  2.44890 
## 
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)    
## (Intercept)   2.3899     0.1949   12.26  < 2e-16 ***
## x2            2.8996     0.6330    4.58 1.37e-05 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 1.072 on 98 degrees of freedom
## Multiple R-squared:  0.1763, Adjusted R-squared:  0.1679 
## F-statistic: 20.98 on 1 and 98 DF,  p-value: 1.366e-05

Yes: the p-value for \(x_2\) (1.37e-05) is very small, so we can reject the null hypothesis for \(x_2\) in this model.

(f)

No, the results do not contradict each other. On its own, each of \(x_1\) and \(x_2\) is statistically significant; in the presence of the other, \(x_2\) is no longer significant because \(x_1\) and \(x_2\) are highly collinear, making it hard to separate their individual effects.
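Collinearity can be quantified with variance inflation factors (the car package is loaded above; output omitted):

vif(lm(y~x1+x2))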

(g)

x1=c(x1,0.1)
x2=c(x2,0.8)
y=c(y,6)

par(mfrow = c(2,2))
lm.fit4 = lm(y ~ x1 + x2)
summary(lm.fit4)
## 
## Call:
## lm(formula = y ~ x1 + x2)
## 
## Residuals:
##      Min       1Q   Median       3Q      Max 
## -2.73348 -0.69318 -0.05263  0.66385  2.30619 
## 
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)    
## (Intercept)   2.2267     0.2314   9.624 7.91e-16 ***
## x1            0.5394     0.5922   0.911  0.36458    
## x2            2.5146     0.8977   2.801  0.00614 ** 
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 1.075 on 98 degrees of freedom
## Multiple R-squared:  0.2188, Adjusted R-squared:  0.2029 
## F-statistic: 13.72 on 2 and 98 DF,  p-value: 5.564e-06
plot(lm.fit4)

lm.fit5 = lm(y ~ x1)
summary(lm.fit5)
## 
## Call:
## lm(formula = y ~ x1)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -2.8897 -0.6556 -0.0909  0.5682  3.5665 
## 
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)    
## (Intercept)   2.2569     0.2390   9.445 1.78e-15 ***
## x1            1.7657     0.4124   4.282 4.29e-05 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 1.111 on 99 degrees of freedom
## Multiple R-squared:  0.1562, Adjusted R-squared:  0.1477 
## F-statistic: 18.33 on 1 and 99 DF,  p-value: 4.295e-05
plot(lm.fit5)

lm.fit6 = lm(y ~ x2)
summary(lm.fit6)
## 
## Call:
## lm(formula = y ~ x2)
## 
## Residuals:
##      Min       1Q   Median       3Q      Max 
## -2.64729 -0.71021 -0.06899  0.72699  2.38074 
## 
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)    
## (Intercept)   2.3451     0.1912  12.264  < 2e-16 ***
## x2            3.1190     0.6040   5.164 1.25e-06 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 1.074 on 99 degrees of freedom
## Multiple R-squared:  0.2122, Adjusted R-squared:  0.2042 
## F-statistic: 26.66 on 1 and 99 DF,  p-value: 1.253e-06
plot(lm.fit6)

After adding this point, it shows up as an outlier in the model on \(x_1\) alone: the \({\rm R}^2\) of that simple regression drops from 0.2024 to 0.1562.

The third set of plots shows that the newly added observation is a high leverage point in the model on \(x_2\).
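A quick numerical confirmation (output omitted):

which.max(hatvalues(lm.fit6))  # the added observation, number 101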


Q15

(a)

library(MASS)
head(Boston)
##      crim zn indus chas   nox    rm  age    dis rad tax ptratio  black
## 1 0.00632 18  2.31    0 0.538 6.575 65.2 4.0900   1 296    15.3 396.90
## 2 0.02731  0  7.07    0 0.469 6.421 78.9 4.9671   2 242    17.8 396.90
## 3 0.02729  0  7.07    0 0.469 7.185 61.1 4.9671   2 242    17.8 392.83
## 4 0.03237  0  2.18    0 0.458 6.998 45.8 6.0622   3 222    18.7 394.63
## 5 0.06905  0  2.18    0 0.458 7.147 54.2 6.0622   3 222    18.7 396.90
## 6 0.02985  0  2.18    0 0.458 6.430 58.7 6.0622   3 222    18.7 394.12
##   lstat medv
## 1  4.98 24.0
## 2  9.14 21.6
## 3  4.03 34.7
## 4  2.94 33.4
## 5  5.33 36.2
## 6  5.21 28.7
var=names(Boston[,-1])  # predictor names (all columns except crim)

lmf<-function(predictor){
  print(predictor)
  summary(lm(Boston$crim~Boston[,predictor]))
  
}

lapply(var,lmf)
## [1] "zn"
## [1] "indus"
## [1] "chas"
## [1] "nox"
## [1] "rm"
## [1] "age"
## [1] "dis"
## [1] "rad"
## [1] "tax"
## [1] "ptratio"
## [1] "black"
## [1] "lstat"
## [1] "medv"
## [[1]]
## 
## Call:
## lm(formula = Boston$crim ~ Boston[, predictor])
## 
## Residuals:
##    Min     1Q Median     3Q    Max 
## -4.429 -4.222 -2.620  1.250 84.523 
## 
## Coefficients:
##                     Estimate Std. Error t value Pr(>|t|)    
## (Intercept)          4.45369    0.41722  10.675  < 2e-16 ***
## Boston[, predictor] -0.07393    0.01609  -4.594 5.51e-06 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 8.435 on 504 degrees of freedom
## Multiple R-squared:  0.04019,    Adjusted R-squared:  0.03828 
## F-statistic:  21.1 on 1 and 504 DF,  p-value: 5.506e-06
## 
## 
## [[2]]
## 
## Call:
## lm(formula = Boston$crim ~ Boston[, predictor])
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -11.972  -2.698  -0.736   0.712  81.813 
## 
## Coefficients:
##                     Estimate Std. Error t value Pr(>|t|)    
## (Intercept)         -2.06374    0.66723  -3.093  0.00209 ** 
## Boston[, predictor]  0.50978    0.05102   9.991  < 2e-16 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 7.866 on 504 degrees of freedom
## Multiple R-squared:  0.1653, Adjusted R-squared:  0.1637 
## F-statistic: 99.82 on 1 and 504 DF,  p-value: < 2.2e-16
## 
## 
## [[3]]
## 
## Call:
## lm(formula = Boston$crim ~ Boston[, predictor])
## 
## Residuals:
##    Min     1Q Median     3Q    Max 
## -3.738 -3.661 -3.435  0.018 85.232 
## 
## Coefficients:
##                     Estimate Std. Error t value Pr(>|t|)    
## (Intercept)           3.7444     0.3961   9.453   <2e-16 ***
## Boston[, predictor]  -1.8928     1.5061  -1.257    0.209    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 8.597 on 504 degrees of freedom
## Multiple R-squared:  0.003124,   Adjusted R-squared:  0.001146 
## F-statistic: 1.579 on 1 and 504 DF,  p-value: 0.2094
## 
## 
## [[4]]
## 
## Call:
## lm(formula = Boston$crim ~ Boston[, predictor])
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -12.371  -2.738  -0.974   0.559  81.728 
## 
## Coefficients:
##                     Estimate Std. Error t value Pr(>|t|)    
## (Intercept)          -13.720      1.699  -8.073 5.08e-15 ***
## Boston[, predictor]   31.249      2.999  10.419  < 2e-16 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 7.81 on 504 degrees of freedom
## Multiple R-squared:  0.1772, Adjusted R-squared:  0.1756 
## F-statistic: 108.6 on 1 and 504 DF,  p-value: < 2.2e-16
## 
## 
## [[5]]
## 
## Call:
## lm(formula = Boston$crim ~ Boston[, predictor])
## 
## Residuals:
##    Min     1Q Median     3Q    Max 
## -6.604 -3.952 -2.654  0.989 87.197 
## 
## Coefficients:
##                     Estimate Std. Error t value Pr(>|t|)    
## (Intercept)           20.482      3.365   6.088 2.27e-09 ***
## Boston[, predictor]   -2.684      0.532  -5.045 6.35e-07 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 8.401 on 504 degrees of freedom
## Multiple R-squared:  0.04807,    Adjusted R-squared:  0.04618 
## F-statistic: 25.45 on 1 and 504 DF,  p-value: 6.347e-07
## 
## 
## [[6]]
## 
## Call:
## lm(formula = Boston$crim ~ Boston[, predictor])
## 
## Residuals:
##    Min     1Q Median     3Q    Max 
## -6.789 -4.257 -1.230  1.527 82.849 
## 
## Coefficients:
##                     Estimate Std. Error t value Pr(>|t|)    
## (Intercept)         -3.77791    0.94398  -4.002 7.22e-05 ***
## Boston[, predictor]  0.10779    0.01274   8.463 2.85e-16 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 8.057 on 504 degrees of freedom
## Multiple R-squared:  0.1244, Adjusted R-squared:  0.1227 
## F-statistic: 71.62 on 1 and 504 DF,  p-value: 2.855e-16
## 
## 
## [[7]]
## 
## Call:
## lm(formula = Boston$crim ~ Boston[, predictor])
## 
## Residuals:
##    Min     1Q Median     3Q    Max 
## -6.708 -4.134 -1.527  1.516 81.674 
## 
## Coefficients:
##                     Estimate Std. Error t value Pr(>|t|)    
## (Intercept)           9.4993     0.7304  13.006   <2e-16 ***
## Boston[, predictor]  -1.5509     0.1683  -9.213   <2e-16 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 7.965 on 504 degrees of freedom
## Multiple R-squared:  0.1441, Adjusted R-squared:  0.1425 
## F-statistic: 84.89 on 1 and 504 DF,  p-value: < 2.2e-16
## 
## 
## [[8]]
## 
## Call:
## lm(formula = Boston$crim ~ Boston[, predictor])
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -10.164  -1.381  -0.141   0.660  76.433 
## 
## Coefficients:
##                     Estimate Std. Error t value Pr(>|t|)    
## (Intercept)         -2.28716    0.44348  -5.157 3.61e-07 ***
## Boston[, predictor]  0.61791    0.03433  17.998  < 2e-16 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 6.718 on 504 degrees of freedom
## Multiple R-squared:  0.3913, Adjusted R-squared:   0.39 
## F-statistic: 323.9 on 1 and 504 DF,  p-value: < 2.2e-16
## 
## 
## [[9]]
## 
## Call:
## lm(formula = Boston$crim ~ Boston[, predictor])
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -12.513  -2.738  -0.194   1.065  77.696 
## 
## Coefficients:
##                      Estimate Std. Error t value Pr(>|t|)    
## (Intercept)         -8.528369   0.815809  -10.45   <2e-16 ***
## Boston[, predictor]  0.029742   0.001847   16.10   <2e-16 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 6.997 on 504 degrees of freedom
## Multiple R-squared:  0.3396, Adjusted R-squared:  0.3383 
## F-statistic: 259.2 on 1 and 504 DF,  p-value: < 2.2e-16
## 
## 
## [[10]]
## 
## Call:
## lm(formula = Boston$crim ~ Boston[, predictor])
## 
## Residuals:
##    Min     1Q Median     3Q    Max 
## -7.654 -3.985 -1.912  1.825 83.353 
## 
## Coefficients:
##                     Estimate Std. Error t value Pr(>|t|)    
## (Intercept)         -17.6469     3.1473  -5.607 3.40e-08 ***
## Boston[, predictor]   1.1520     0.1694   6.801 2.94e-11 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 8.24 on 504 degrees of freedom
## Multiple R-squared:  0.08407,    Adjusted R-squared:  0.08225 
## F-statistic: 46.26 on 1 and 504 DF,  p-value: 2.943e-11
## 
## 
## [[11]]
## 
## Call:
## lm(formula = Boston$crim ~ Boston[, predictor])
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -13.756  -2.299  -2.095  -1.296  86.822 
## 
## Coefficients:
##                      Estimate Std. Error t value Pr(>|t|)    
## (Intercept)         16.553529   1.425903  11.609   <2e-16 ***
## Boston[, predictor] -0.036280   0.003873  -9.367   <2e-16 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 7.946 on 504 degrees of freedom
## Multiple R-squared:  0.1483, Adjusted R-squared:  0.1466 
## F-statistic: 87.74 on 1 and 504 DF,  p-value: < 2.2e-16
## 
## 
## [[12]]
## 
## Call:
## lm(formula = Boston$crim ~ Boston[, predictor])
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -13.925  -2.822  -0.664   1.079  82.862 
## 
## Coefficients:
##                     Estimate Std. Error t value Pr(>|t|)    
## (Intercept)         -3.33054    0.69376  -4.801 2.09e-06 ***
## Boston[, predictor]  0.54880    0.04776  11.491  < 2e-16 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 7.664 on 504 degrees of freedom
## Multiple R-squared:  0.2076, Adjusted R-squared:  0.206 
## F-statistic:   132 on 1 and 504 DF,  p-value: < 2.2e-16
## 
## 
## [[13]]
## 
## Call:
## lm(formula = Boston$crim ~ Boston[, predictor])
## 
## Residuals:
##    Min     1Q Median     3Q    Max 
## -9.071 -4.022 -2.343  1.298 80.957 
## 
## Coefficients:
##                     Estimate Std. Error t value Pr(>|t|)    
## (Intercept)         11.79654    0.93419   12.63   <2e-16 ***
## Boston[, predictor] -0.36316    0.03839   -9.46   <2e-16 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 7.934 on 504 degrees of freedom
## Multiple R-squared:  0.1508, Adjusted R-squared:  0.1491 
## F-statistic: 89.49 on 1 and 504 DF,  p-value: < 2.2e-16

Except for chas, every predictor has a statistically significant association with the response in its own simple regression.

(b)

lm.fit=lm(crim~.,Boston)
summary(lm.fit)$fstatistic
##     value     numdf     dendf 
##  31.47047  13.00000 492.00000

In the multiple regression we can reject the null hypothesis for zn, dis, rad, black and medv.
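The significant predictors can be extracted programmatically (output omitted; the intercept row also passes the cutoff):

coefs=summary(lm.fit)$coefficients
rownames(coefs)[coefs[, "Pr(>|t|)"] < 0.05]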

(c)

lmfc<-function(predictor){
  x=lm(Boston$crim~Boston[,predictor])
  coef=x$coefficients[2]  # slope from the simple regression
  return(coef)
}
uni=unlist(lapply(var,lmfc))
multi=lm(crim~.,Boston)$coefficients[2:14]
plot(uni,multi)
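To see which predictor drives the biggest discrepancy between the two sets of estimates, the points can be labeled (run in the same device as the plot call above):

text(uni, multi, labels=var, pos=3, cex=0.7)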

(d)

summary(lm(crim~poly(zn,3), data=Boston))      # 1,2
## 
## Call:
## lm(formula = crim ~ poly(zn, 3), data = Boston)
## 
## Residuals:
##    Min     1Q Median     3Q    Max 
## -4.821 -4.614 -1.294  0.473 84.130 
## 
## Coefficients:
##              Estimate Std. Error t value Pr(>|t|)    
## (Intercept)    3.6135     0.3722   9.709  < 2e-16 ***
## poly(zn, 3)1 -38.7498     8.3722  -4.628  4.7e-06 ***
## poly(zn, 3)2  23.9398     8.3722   2.859  0.00442 ** 
## poly(zn, 3)3 -10.0719     8.3722  -1.203  0.22954    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 8.372 on 502 degrees of freedom
## Multiple R-squared:  0.05824,    Adjusted R-squared:  0.05261 
## F-statistic: 10.35 on 3 and 502 DF,  p-value: 1.281e-06
summary(lm(crim~poly(indus,3), data=Boston))   # 1,2,3
## 
## Call:
## lm(formula = crim ~ poly(indus, 3), data = Boston)
## 
## Residuals:
##    Min     1Q Median     3Q    Max 
## -8.278 -2.514  0.054  0.764 79.713 
## 
## Coefficients:
##                 Estimate Std. Error t value Pr(>|t|)    
## (Intercept)        3.614      0.330  10.950  < 2e-16 ***
## poly(indus, 3)1   78.591      7.423  10.587  < 2e-16 ***
## poly(indus, 3)2  -24.395      7.423  -3.286  0.00109 ** 
## poly(indus, 3)3  -54.130      7.423  -7.292  1.2e-12 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 7.423 on 502 degrees of freedom
## Multiple R-squared:  0.2597, Adjusted R-squared:  0.2552 
## F-statistic: 58.69 on 3 and 502 DF,  p-value: < 2.2e-16
summary(lm(crim~poly(nox,3), data=Boston))     # 1,2,3
## 
## Call:
## lm(formula = crim ~ poly(nox, 3), data = Boston)
## 
## Residuals:
##    Min     1Q Median     3Q    Max 
## -9.110 -2.068 -0.255  0.739 78.302 
## 
## Coefficients:
##               Estimate Std. Error t value Pr(>|t|)    
## (Intercept)     3.6135     0.3216  11.237  < 2e-16 ***
## poly(nox, 3)1  81.3720     7.2336  11.249  < 2e-16 ***
## poly(nox, 3)2 -28.8286     7.2336  -3.985 7.74e-05 ***
## poly(nox, 3)3 -60.3619     7.2336  -8.345 6.96e-16 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 7.234 on 502 degrees of freedom
## Multiple R-squared:  0.297,  Adjusted R-squared:  0.2928 
## F-statistic: 70.69 on 3 and 502 DF,  p-value: < 2.2e-16
summary(lm(crim~poly(rm,3), data=Boston))      # 1,2
## 
## Call:
## lm(formula = crim ~ poly(rm, 3), data = Boston)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -18.485  -3.468  -2.221  -0.015  87.219 
## 
## Coefficients:
##              Estimate Std. Error t value Pr(>|t|)    
## (Intercept)    3.6135     0.3703   9.758  < 2e-16 ***
## poly(rm, 3)1 -42.3794     8.3297  -5.088 5.13e-07 ***
## poly(rm, 3)2  26.5768     8.3297   3.191  0.00151 ** 
## poly(rm, 3)3  -5.5103     8.3297  -0.662  0.50858    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 8.33 on 502 degrees of freedom
## Multiple R-squared:  0.06779,    Adjusted R-squared:  0.06222 
## F-statistic: 12.17 on 3 and 502 DF,  p-value: 1.067e-07
summary(lm(crim~poly(age,3), data=Boston))     # 1,2,3
## 
## Call:
## lm(formula = crim ~ poly(age, 3), data = Boston)
## 
## Residuals:
##    Min     1Q Median     3Q    Max 
## -9.762 -2.673 -0.516  0.019 82.842 
## 
## Coefficients:
##               Estimate Std. Error t value Pr(>|t|)    
## (Intercept)     3.6135     0.3485  10.368  < 2e-16 ***
## poly(age, 3)1  68.1820     7.8397   8.697  < 2e-16 ***
## poly(age, 3)2  37.4845     7.8397   4.781 2.29e-06 ***
## poly(age, 3)3  21.3532     7.8397   2.724  0.00668 ** 
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 7.84 on 502 degrees of freedom
## Multiple R-squared:  0.1742, Adjusted R-squared:  0.1693 
## F-statistic: 35.31 on 3 and 502 DF,  p-value: < 2.2e-16
summary(lm(crim~poly(dis,3), data=Boston))     # 1,2,3
## 
## Call:
## lm(formula = crim ~ poly(dis, 3), data = Boston)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -10.757  -2.588   0.031   1.267  76.378 
## 
## Coefficients:
##               Estimate Std. Error t value Pr(>|t|)    
## (Intercept)     3.6135     0.3259  11.087  < 2e-16 ***
## poly(dis, 3)1 -73.3886     7.3315 -10.010  < 2e-16 ***
## poly(dis, 3)2  56.3730     7.3315   7.689 7.87e-14 ***
## poly(dis, 3)3 -42.6219     7.3315  -5.814 1.09e-08 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 7.331 on 502 degrees of freedom
## Multiple R-squared:  0.2778, Adjusted R-squared:  0.2735 
## F-statistic: 64.37 on 3 and 502 DF,  p-value: < 2.2e-16
summary(lm(crim~poly(rad,3), data=Boston))     # 1,2
## 
## Call:
## lm(formula = crim ~ poly(rad, 3), data = Boston)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -10.381  -0.412  -0.269   0.179  76.217 
## 
## Coefficients:
##               Estimate Std. Error t value Pr(>|t|)    
## (Intercept)     3.6135     0.2971  12.164  < 2e-16 ***
## poly(rad, 3)1 120.9074     6.6824  18.093  < 2e-16 ***
## poly(rad, 3)2  17.4923     6.6824   2.618  0.00912 ** 
## poly(rad, 3)3   4.6985     6.6824   0.703  0.48231    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 6.682 on 502 degrees of freedom
## Multiple R-squared:    0.4,  Adjusted R-squared:  0.3965 
## F-statistic: 111.6 on 3 and 502 DF,  p-value: < 2.2e-16
summary(lm(crim~poly(tax,3), data=Boston))     # 1,2
## 
## Call:
## lm(formula = crim ~ poly(tax, 3), data = Boston)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -13.273  -1.389   0.046   0.536  76.950 
## 
## Coefficients:
##               Estimate Std. Error t value Pr(>|t|)    
## (Intercept)     3.6135     0.3047  11.860  < 2e-16 ***
## poly(tax, 3)1 112.6458     6.8537  16.436  < 2e-16 ***
## poly(tax, 3)2  32.0873     6.8537   4.682 3.67e-06 ***
## poly(tax, 3)3  -7.9968     6.8537  -1.167    0.244    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 6.854 on 502 degrees of freedom
## Multiple R-squared:  0.3689, Adjusted R-squared:  0.3651 
## F-statistic:  97.8 on 3 and 502 DF,  p-value: < 2.2e-16
summary(lm(crim~poly(ptratio,3), data=Boston)) # 1,2,3
## 
## Call:
## lm(formula = crim ~ poly(ptratio, 3), data = Boston)
## 
## Residuals:
##    Min     1Q Median     3Q    Max 
## -6.833 -4.146 -1.655  1.408 82.697 
## 
## Coefficients:
##                   Estimate Std. Error t value Pr(>|t|)    
## (Intercept)          3.614      0.361  10.008  < 2e-16 ***
## poly(ptratio, 3)1   56.045      8.122   6.901 1.57e-11 ***
## poly(ptratio, 3)2   24.775      8.122   3.050  0.00241 ** 
## poly(ptratio, 3)3  -22.280      8.122  -2.743  0.00630 ** 
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 8.122 on 502 degrees of freedom
## Multiple R-squared:  0.1138, Adjusted R-squared:  0.1085 
## F-statistic: 21.48 on 3 and 502 DF,  p-value: 4.171e-13
summary(lm(crim~poly(black,3), data=Boston))   # 1
## 
## Call:
## lm(formula = crim ~ poly(black, 3), data = Boston)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -13.096  -2.343  -2.128  -1.439  86.790 
## 
## Coefficients:
##                 Estimate Std. Error t value Pr(>|t|)    
## (Intercept)       3.6135     0.3536  10.218   <2e-16 ***
## poly(black, 3)1 -74.4312     7.9546  -9.357   <2e-16 ***
## poly(black, 3)2   5.9264     7.9546   0.745    0.457    
## poly(black, 3)3  -4.8346     7.9546  -0.608    0.544    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 7.955 on 502 degrees of freedom
## Multiple R-squared:  0.1498, Adjusted R-squared:  0.1448 
## F-statistic: 29.49 on 3 and 502 DF,  p-value: < 2.2e-16
summary(lm(crim~poly(lstat,3), data=Boston))   # 1,2
## 
## Call:
## lm(formula = crim ~ poly(lstat, 3), data = Boston)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -15.234  -2.151  -0.486   0.066  83.353 
## 
## Coefficients:
##                 Estimate Std. Error t value Pr(>|t|)    
## (Intercept)       3.6135     0.3392  10.654   <2e-16 ***
## poly(lstat, 3)1  88.0697     7.6294  11.543   <2e-16 ***
## poly(lstat, 3)2  15.8882     7.6294   2.082   0.0378 *  
## poly(lstat, 3)3 -11.5740     7.6294  -1.517   0.1299    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 7.629 on 502 degrees of freedom
## Multiple R-squared:  0.2179, Adjusted R-squared:  0.2133 
## F-statistic: 46.63 on 3 and 502 DF,  p-value: < 2.2e-16
summary(lm(crim~poly(medv,3), data=Boston))    # 1,2,3
## 
## Call:
## lm(formula = crim ~ poly(medv, 3), data = Boston)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -24.427  -1.976  -0.437   0.439  73.655 
## 
## Coefficients:
##                Estimate Std. Error t value Pr(>|t|)    
## (Intercept)       3.614      0.292  12.374  < 2e-16 ***
## poly(medv, 3)1  -75.058      6.569 -11.426  < 2e-16 ***
## poly(medv, 3)2   88.086      6.569  13.409  < 2e-16 ***
## poly(medv, 3)3  -48.033      6.569  -7.312 1.05e-12 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 6.569 on 502 degrees of freedom
## Multiple R-squared:  0.4202, Adjusted R-squared:  0.4167 
## F-statistic: 121.3 on 3 and 502 DF,  p-value: < 2.2e-16

Here we skip chas because it is a qualitative variable.

The comment after each call lists which polynomial orders are significant for that predictor.